In [18]:
__author__ = 'Brian Merino <brian.merino@noirlab.edu>, Vinicius Placco <vinicius.placco@noirlab.edu>'
__version__ = '20240606' # yyyymmdd; version datestamp of this notebook
__keywords__ = ['gmos','gemini','stars','dragons']

Gemini GMOS starry field photometry reduction using DRAGONS Python API¶

adapted from https://dragons.readthedocs.io/projects/gmosimg-drtutorial/en/v3.1.0/ex1_gmosim_starfield_api.html¶

Table of contents¶

  • Goals
  • Summary
  • Disclaimers and attribution
  • Imports and setup
  • About the dataset
  • Downloading data for reduction
  • Set up the DRAGONS logger
  • Create File Lists
  • Create Master Bias
  • Create Master Flat Field
  • Reduce Science Images
  • Display stacked final image
  • Clean-up (optional)

Goals¶

Showcase how to reduce GMOS imaging data with the Gemini DRAGONS package on the Data Lab science platform, using the custom DRAGONS kernel "DRAGONS (Py3.7)". The steps include downloading data from the Gemini archive, setting up the DRAGONS calibration service, processing bias, flat, fringe, and science frames, and creating a single combined stacked image.

Summary¶

DRAGONS is a Python-based astronomical data reduction platform written by the Gemini Science User Support Department. It can currently be used to reduce imaging data from Gemini instruments GMOS, NIRI, Flamingos 2, GSAOI, and GNIRS, as well as spectroscopic data in GMOS longslit mode. Linked here is a general list of guides, manuals, and tutorials about the use of DRAGONS.

The DRAGONS kernel has been made available in the Data Lab environment, allowing users to access the routines without needing to install the software on their local machines.

In this notebook, we present an example of a DRAGONS Jupyter notebook that works in the Data Lab environment to fully reduce example Gemini North GMOS i-band imaging data. This notebook does not present all of the many options available to adjust or optimize the DRAGONS GMOS data reduction process; rather, it shows one example of a standard reduction of a GMOS imaging dataset.

The data used in this notebook example are GMOS i-band images from the Gemini archive of a starry field, taken during the Gemini North Hamamatsu CCD commissioning (Program: GN-2017A-SV-151).

Disclaimer & attribution¶

Disclaimers¶

Note that using the Astro Data Lab constitutes your agreement with our minimal Disclaimers.

Acknowledgments¶

If you use Astro Data Lab in your published research, please include the following text in your paper's Acknowledgments section:

This research uses services or data provided by the Astro Data Lab, which is part of the Community Science and Data Center (CSDC) Program of NSF NOIRLab. NOIRLab is operated by the Association of Universities for Research in Astronomy (AURA), Inc. under a cooperative agreement with the U.S. National Science Foundation.

If you use SPARCL jointly with the Astro Data Lab platform (via JupyterLab, command-line, or web interface) in your published research, please include this text below in your paper's Acknowledgments section:

This research uses services or data provided by the SPectra Analysis and Retrievable Catalog Lab (SPARCL) and the Astro Data Lab, which are both part of the Community Science and Data Center (CSDC) Program of NSF NOIRLab. NOIRLab is operated by the Association of Universities for Research in Astronomy (AURA), Inc. under a cooperative agreement with the U.S. National Science Foundation.

In either case please cite the following papers:

  • Data Lab concept paper: Fitzpatrick et al., "The NOAO Data Laboratory: a conceptual overview", SPIE, 9149, 2014, https://doi.org/10.1117/12.2057445

  • Astro Data Lab overview: Nikutta et al., "Data Lab - A Community Science Platform", Astronomy and Computing, 33, 2020, https://doi.org/10.1016/j.ascom.2020.100411

If you are referring to the Data Lab JupyterLab / Jupyter Notebooks, cite:

  • Juneau et al., "Jupyter-Enabled Astrophysical Analysis Using Data-Proximate Computing Platforms", CiSE, 23, 15, 2021, https://doi.org/10.1109/MCSE.2021.3057097

If publishing in an AAS journal, also add the keyword: \facility{Astro Data Lab}

And if you are using SPARCL, please also add \software{SPARCL} and cite:

  • Juneau et al., "SPARCL: SPectra Analysis and Retrievable Catalog Lab", Conference Proceedings for ADASS XXXIII, 2024, https://doi.org/10.48550/arXiv.2401.05576

The NOIRLab Library maintains lists of proper acknowledgments to use when publishing papers using the Lab's facilities, data, or services.

For this notebook specifically, please acknowledge:

  • DRAGONS publication: Labrie et al., "DRAGONS - Data Reduction for Astronomy from Gemini Observatory North and South", ASPC, 523, 321L

  • DRAGONS open source software publication

Importing Python libraries¶

In [2]:
import warnings
import glob

from gempy.adlibrary import dataselect
from gempy.utils import logutils

from recipe_system import cal_service
from recipe_system.reduction.coreReduce import Reduce

from astropy.io import fits
from astropy.wcs import WCS
from astropy.utils.exceptions import AstropyWarning

import matplotlib.pyplot as plt
from matplotlib.colors import LogNorm

warnings.simplefilter('ignore', category=AstropyWarning)

About the dataset¶

The data used for this tutorial is a dithered sequence on a starry field.

The table below contains a summary of the dataset:

| Observation Type | File name(s)       | Exposure and filter |
|------------------|--------------------|---------------------|
| Science          | N20170614S0201-205 | 10 s, i-band        |
| Bias             | N20170613S0180-184 |                     |
| Bias             | N20170615S0534-538 |                     |
| Twilight Flats   | N20170702S0178-182 | 40 to 16 s, i-band  |

Downloading the data¶

Download the i-band images from the Gemini archive to the current working directory. This step only needs to be executed once.

If you are running this notebook for the first time and need to download the dataset, set the variable download="True" in the cell below. If it is set to "False", the notebook will not redownload the dataset, which is useful if you run the notebook more than once.

In [3]:
%%bash 

# create file that lists FITS files to be downloaded
echo "\
http://archive.gemini.edu/file/N20170613S0180.fits
http://archive.gemini.edu/file/N20170613S0181.fits
http://archive.gemini.edu/file/N20170613S0182.fits
http://archive.gemini.edu/file/N20170613S0183.fits
http://archive.gemini.edu/file/N20170613S0184.fits
http://archive.gemini.edu/file/N20170614S0201.fits
http://archive.gemini.edu/file/N20170614S0202.fits
http://archive.gemini.edu/file/N20170614S0203.fits
http://archive.gemini.edu/file/N20170614S0204.fits
http://archive.gemini.edu/file/N20170614S0205.fits
http://archive.gemini.edu/file/N20170615S0534.fits
http://archive.gemini.edu/file/N20170615S0535.fits
http://archive.gemini.edu/file/N20170615S0536.fits
http://archive.gemini.edu/file/N20170615S0537.fits
http://archive.gemini.edu/file/N20170615S0538.fits
http://archive.gemini.edu/file/N20170702S0178.fits
http://archive.gemini.edu/file/N20170702S0179.fits
http://archive.gemini.edu/file/N20170702S0180.fits
http://archive.gemini.edu/file/N20170702S0181.fits
http://archive.gemini.edu/file/N20170702S0182.fits\
" > gmos_im_star.list
In [4]:
%%bash

download="True"

if [ "$download" = "True" ]; then
    wget --no-check-certificate -N -q -i gmos_im_star.list

else
    echo "Skipping download. To download the data set used in this notebook, set download=True."
fi
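If wget is not available in your environment, the download step can also be done from Python. The sketch below is a rough equivalent of the bash cell above; fetch_list and local_name are hypothetical helper names introduced here for illustration, not part of DRAGONS:

```python
import os
import urllib.request

def local_name(url):
    """Return the local file name that a URL from the download list maps to."""
    return url.rsplit("/", 1)[-1]

def fetch_list(list_file="gmos_im_star.list", download=True):
    """Download every URL listed in list_file, one per line."""
    if not download:
        print("Skipping download. Set download=True to fetch the data.")
        return
    with open(list_file) as f:
        urls = [line.strip() for line in f if line.strip()]
    for url in urls:
        name = local_name(url)
        if os.path.exists(name):  # roughly mirrors wget -N: skip files already present
            continue
        urllib.request.urlretrieve(url, name)
```

Calling fetch_list() after the file list has been written reproduces the behavior of the bash cell, including the download flag.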

Setting up the DRAGONS logger¶

DRAGONS comes with a local calibration manager that uses the same calibration association rules as the Gemini Observatory Archive. This allows reduce to make requests to a local lightweight database for matching processed calibrations when needed to reduce a dataset.

The cell below configures the DRAGONS log file and tells the system where to put the calibration database. This database will keep track of the processed calibrations we are going to send to it.

In [5]:
logutils.config(file_name='gmos_data_reduction.log')
caldb = cal_service.set_local_database()
caldb.init("w")
In [6]:
all_files = glob.glob('*.fits')
all_files.sort()

Create file lists¶

This dataset contains science and calibration frames. Other programs may include multiple observed targets and exposure times, depending on how the raw data are organized.

The DRAGONS data reduction pipeline does not organize the data for you; you have to do it yourself. DRAGONS provides tools to help with that.

The first step is to create lists that will be used in the data reduction process. For that, we use dataselect. Please refer to the dataselect documentation for details regarding its usage.

List of biases

In [7]:
list_of_biases = dataselect.select_data(
    all_files,
    ['BIAS'],
    []
)

List of flats

If your dataset has flats obtained with more than one filter, you can pass the expression 'filter_name=="i"' through dataselect.expr_parser to select only the flats obtained in the i-band. For example:

In [8]:
list_of_flats = dataselect.select_data(
     all_files,
     ['FLAT'],
     [],
     dataselect.expr_parser('filter_name=="i"')
)

List of science data

In [9]:
list_of_science = dataselect.select_data(
    all_files,
    [],
    ['CAL'],
    dataselect.expr_parser('(observation_class=="science" and filter_name=="i")')
)

Create a master bias¶

We start the data reduction by creating a master bias for the science data. It can be created and added to the calibration database using the commands below. The master bias will have the name of the first bias frame with the suffix _bias.fits.

In [10]:
reduce_bias = Reduce()
reduce_bias.files.extend(list_of_biases)
reduce_bias.runr()
All submitted files appear valid:
N20170613S0180.fits ... N20170615S0538.fits, 10 files submitted.
Ginga not installed, use other viewer, or no viewer

Create a master flat field¶

Twilight flat images are used to produce an imaging master flat and the result is added to the calibration database.

The master flat will have the name of the first twilight flat file with the suffix _flat.fits.

In [11]:
reduce_flats = Reduce()
reduce_flats.files.extend(list_of_flats)
reduce_flats.runr()

Reduce science images¶

Once our calibration files are processed and added to the database, we can run reduce on our science data.

This command generates bias- and flat-corrected files and stacks them. If a fringe frame is needed, the correction is applied automatically. The stacked image will have the _stack suffix.

The output stack units are in electrons (header keyword BUNIT=electrons). The output stack is stored in a multi-extension FITS (MEF) file. The science signal is in the "SCI" extension, the variance is in the "VAR" extension, and the data quality plane (mask) is in the "DQ" extension.

Each reduced science image will have the original name with the suffix _image.fits.

In [12]:
reduce_science = Reduce()
reduce_science.files.extend(list_of_science)
reduce_science.runr()

Display the stacked image¶

In [13]:
image_file = "N20170614S0201_image.fits"
hdu_list = fits.open(image_file)
wcs = WCS(hdu_list[1].header)
hdu_list.info()
Filename: N20170614S0201_image.fits
No.    Name      Ver    Type      Cards   Dimensions   Format
  0  PRIMARY       1 PrimaryHDU     214   ()      
  1  SCI           1 ImageHDU       148   (3245, 2264)   float32   
  2  VAR           1 ImageHDU       148   (3245, 2264)   float32   
  3  DQ            1 ImageHDU       148   (3245, 2264)   int16 (rescales to uint16)   
  4  PROVENANCE    1 BinTableHDU     17   7R x 4C   [28A, 128A, 128A, 128A]   
  5  PROVHISTORY    1 BinTableHDU     17   18R x 4C   [128A, 288A, 28A, 28A]   
In [14]:
image_data = fits.getdata(image_file, ext=1)
print(image_data.shape)
(2264, 3245)
In [15]:
plt.figure(figsize=(10, 10))
plt.subplot(projection=wcs)
plt.imshow(image_data, cmap='gray', norm=LogNorm(vmin=0.01, vmax=1000000), origin='lower')
plt.xlabel('Right Ascension [hh:mm:ss]', fontsize=14, fontweight='bold')
plt.ylabel('Declination [degree]', fontsize=14, fontweight='bold')
plt.show()
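The VAR and DQ planes described earlier can be used alongside SCI, for example to mask flagged pixels and estimate a per-pixel signal-to-noise ratio. A minimal sketch using small synthetic numpy arrays as stand-ins for the three extensions (shapes and values are illustrative only; on real data you would read them with fits.getdata(image_file, extname='VAR') and extname='DQ'):

```python
import numpy as np

# Synthetic stand-ins for the SCI, VAR, and DQ planes of the stacked MEF file.
sci = np.array([[100.0, 50.0], [200.0, 10.0]])   # signal in electrons
var = np.array([[ 25.0,  4.0], [100.0,  1.0]])   # variance in electrons^2
dq  = np.array([[    0,    1], [    0,    0]])   # nonzero = flagged pixel

good = dq == 0                                    # keep only unflagged pixels
snr = np.where(good, sci / np.sqrt(var), np.nan)  # per-pixel S/N, NaN where flagged
```

Flagged pixels end up as NaN, so downstream statistics can use the nan-aware numpy functions (np.nanmedian, np.nanmean) without being biased by bad pixels.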

Clean-up (optional): remove calibrations and raw data (uncomment lines before running)¶

In [17]:
# %%bash

# cp N20170614S0201_image.fits final.fits
# rm -r calibrations
# rm gmos_im_star.list gmos_data_reduction.log N2017*
# mv final.fits N20170614S0201_image.fits